What You Sketch Is What You Get: 3D Sketching using Multi-View Deep Volumetric Prediction

Abstract

Sketch-based modeling strives to bring the ease and immediacy of drawing to the 3D world. However, while drawings are easy for humans to create, they are very challenging for computers to interpret due to their sparsity and ambiguity. We propose a data-driven approach that tackles this challenge by learning to reconstruct 3D shapes from one or more drawings. At the core of our approach is a deep convolutional neural network (CNN) that predicts occupancy of a voxel grid from a line drawing. This CNN provides us with an initial 3D reconstruction as soon as the user completes a single drawing of the desired shape. We complement this single-view network with an updater CNN that refines an existing prediction given a new drawing of the shape created from a novel viewpoint. A key advantage of our approach is that we can apply the updater iteratively to fuse information from an arbitrary number of viewpoints, without requiring explicit stroke correspondences between the drawings. We train both CNNs by rendering synthetic contour drawings from hand-modeled shape collections as well as from procedurally-generated abstract shapes. Finally, we integrate our CNNs in a minimal modeling interface that allows users to seamlessly draw an object, rotate it to see its 3D reconstruction, and refine it by re-drawing from another vantage point using the 3D reconstruction as guidance. The main strengths of our approach are its robustness to freehand bitmap drawings, its ability to adapt to different object categories, and the continuum it offers between single-view and multi-view sketch-based modeling.
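To make the described pipeline concrete, below is a minimal PyTorch sketch of the two-network design: a single-view CNN that maps a line drawing to a voxel occupancy grid, an updater CNN that fuses an existing grid with a drawing from a new viewpoint, and an iterative loop that combines an arbitrary number of views. All names (SingleViewNet, UpdaterNet, reconstruct), the layer sizes, the 256x256 input resolution, and the 32^3 grid are illustrative assumptions for this sketch, not the paper's actual architecture; the abstract only specifies the inputs, outputs, and iterative use of the updater.

```python
import torch
import torch.nn as nn

class SingleViewNet(nn.Module):
    # Encodes one 256x256 grayscale line drawing and decodes it into a
    # 32^3 voxel occupancy grid (per-voxel probabilities in [0, 1]).
    # Layer sizes are illustrative, not the paper's.
    def __init__(self, grid=32):
        super().__init__()
        self.grid = grid
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 64 -> 32
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),                 # -> 128*4*4
            nn.Linear(128 * 4 * 4, 512), nn.ReLU(),
        )
        self.decoder = nn.Sequential(nn.Linear(512, grid ** 3), nn.Sigmoid())

    def forward(self, sketch):                      # sketch: (B, 1, 256, 256)
        occ = self.decoder(self.encoder(sketch))
        return occ.view(-1, self.grid, self.grid, self.grid)

class UpdaterNet(nn.Module):
    # Refines an existing occupancy grid given one additional drawing made
    # from a new viewpoint (assumed here to be already aligned with the
    # grid's frame, so no stroke correspondences are needed).
    def __init__(self, grid=32):
        super().__init__()
        self.sketch_enc = SingleViewNet(grid)   # turn the drawing into a grid
        self.fuse = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, voxels, sketch):
        hint = self.sketch_enc(sketch)                 # (B, G, G, G)
        stacked = torch.stack([voxels, hint], dim=1)   # (B, 2, G, G, G)
        return self.fuse(stacked).squeeze(1)

def reconstruct(sketches, single_view, updater):
    # The first drawing yields the initial prediction; every further drawing
    # is fused in by applying the updater iteratively, so any number of
    # viewpoints can be combined.
    voxels = single_view(sketches[0])
    for sketch in sketches[1:]:
        voxels = updater(voxels, sketch)
    return voxels
```

A usage example under the same assumptions: `reconstruct([torch.rand(1, 1, 256, 256) for _ in range(3)], SingleViewNet(), UpdaterNet())` returns a (1, 32, 32, 32) occupancy grid after fusing three views. Because the updater maps a grid and a drawing back to a grid, it can be chained indefinitely, which is what gives the approach its continuum between single-view and multi-view modeling.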